@Part[Number, Root = "CLM.MSS"]
@Comment{Chapter of Common Lisp Manual.  Copyright 1984 Guy L. Steele Jr.⎇


@MyChapter[Numbers]
@Label{number⎇
@Index[number]

@clisp provides several different representations for numbers.
These representations may be divided into four categories: integers,
ratios, floating-point numbers, and complex numbers.  Many numeric
functions will accept any kind of number; they are @i[generic].  Other
functions accept only certain kinds of numbers.

In general, numbers in @clisp are not true objects; @f[eq] cannot
be counted upon to operate on them reliably.  In particular,
it is possible that the expression
@Lisp
(let ((x z) (y z)) (eq x y))
@Endlisp
may be false rather than true if the value of @f[z] is a number.
@Rationale{This odd breakdown of @f[eq] in the case of numbers
allows the implementor enough design freedom to produce exceptionally
efficient numerical code on conventional architectures.
@Maclisp requires this freedom, for example, in order to produce compiled
numerical code equal in speed to @c[fortran].
@clisp makes this same restriction,
if not for this freedom, then at least for the sake of compatibility.⎇
If two objects are to be compared for ``identity,'' but either might be
a number, then the predicate @Funref[eql] is probably appropriate;
if both objects are known to be numbers, then @Xfunref[X {=⎇, L {#&M⎇]
may be preferable.
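For instance, one would expect the following behavior (the first result
is implementation-dependent, as noted above):
@Lisp
(eq 12345678901234567890 12345678901234567890) @r[might be true or false.]
(eql 12345678901234567890 12345678901234567890) @r[is true.]
(eql 3 3.0) @r[is false.]
(= 3 3.0) @r[is true.]
@Endlisp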

@Section[Precision, Contagion, and Coercion]

In general,
computations with floating-point numbers are only approximate.
The @i[precision] of a floating-point number is not necessarily
correlated at all with the @i[accuracy] of that number.
For instance, 3.142857142857142857 is a more precise approximation
to @sail[π] than 3.14159, but the latter is more accurate.
The precision refers to the number of bits retained in the representation.
When an operation combines a short floating-point number with a long one,
the result will be a long floating-point number.  This rule is made
to ensure that as much accuracy as possible is preserved; however,
it is by no means a guarantee.
@clisp numerical routines do assume, however, that the accuracy of
an argument does not exceed its precision.  Therefore
when two small floating-point numbers
are combined, the result will always be a small floating-point number.
This assumption can be overridden by first explicitly converting
a small floating-point number to a larger representation.
(@clisp never converts automatically from a larger size to a smaller one.)
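As a sketch of the consequences, assuming a single floating-point format
with roughly seven decimal digits of precision (the printed values are
only approximate):
@Lisp
(+ 1.0 1.0e-10) @EV 1.0          ;@r[the small addend is lost]
(+ (float 1.0 1.0d0) 1.0e-10) @EV 1.0000000001d0
@Endlisp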

Rational computations cannot overflow in the usual sense
(though of course there may not be enough storage
to represent one), as integers and ratios may in principle be of any magnitude.
Floating-point computations may get exponent overflow or underflow;
this is an error.

When rational and floating-point numbers are compared or combined by
a numerical function, the rule of @i[floating-point contagion]
is followed: when a rational meets a floating-point number,
the rational is first converted to a floating-point number of
the same format.  For functions such as @f[+]
that take more than two arguments,
it may be that part of the operation is carried out exactly using
rationals and then the rest is done using floating-point arithmetic.

For functions that are mathematically associative (and possibly commutative),
a @clisp implementation may process the arguments in any manner consistent
with associative (and possibly commutative) rearrangement.
This does not affect the order in which the argument forms
are evaluated, of course; that order is always left to right,
as in all @clisp function calls.  What is left loose is the
order in which the argument values are processed.
The point of all this is that implementations may differ in 
which automatic coercions are applied because of differing
orders of argument processing.  As an example, consider this
expression:
@lisp
(+ 1/3 2/3 1.0D0 1.0 1.0E-15)
@endlisp
One implementation might process the arguments from left to right,
first adding @f[1/3] and @f[2/3] to get @f[1], then converting that
to a double-precision floating-point number for combination
with @f[1.0D0], then successively converting and adding @f[1.0] and
@f[1.0E-15].  Another implementation might process the arguments
from right to left, first performing a single-precision floating-point addition
of @f[1.0] and @f[1.0E-15] (and probably losing some accuracy
in the process!), then converting the sum to double precision
and adding @f[1.0D0], then converting @f[2/3] to double-precision
floating-point and adding it, and then converting @f[1/3] and adding that.
A third implementation might first scan all the arguments, process
all the rationals first to keep that part of the computation exact,
then find an argument of the largest floating-point format among all
the arguments and add that, and then add in all other arguments,
converting each in turn (all in a perhaps misguided attempt to make
the computation as accurate as possible).  In any case, all three
strategies are legitimate.  The user can of course control the order of
processing explicitly by writing several calls; for example:
@lisp
(+ (+ 1/3 2/3) (+ 1.0D0 1.0E-15) 1.0)
@endlisp
The user can also control all coercions simply by writing calls
to coercion functions explicitly.

In general, then, the type of the result of a numerical function
is a floating-point number of the largest format among all the
floating-point arguments to the function; but if the arguments
are all rational, then the result is rational (except for functions
that can produce mathematically irrational results, in which case
a single-format floating-point number may result).
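For instance, one would expect results such as these:
@lisp
(+ 1/2 1/2) @EV 1
(+ 1/2 0.5) @EV 1.0
(+ 1/2 0.5d0) @EV 1.0d0
(* 2 3.0s0 4.0d0) @EV 24.0d0
@endlisp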

There is a separate rule of complex contagion.
As a rule, complex numbers never result from a numerical function
unless one or more of the
arguments is complex.  (Exceptions to this
rule occur among the irrational and transcendental functions,
specifically @Funref[expt], @Funref[log], @Funref[sqrt],
@Funref[asin], @Funref[acos], @Funref[acosh], and @Funref[atanh];
see section @ref[TRANSCENDENTAL-SECTION].)
When a non-complex number meets a complex number, the non-complex
number is in effect first converted to a complex number by providing an
imaginary part of @f[0].

If any computation produces a result that is a ratio of
two integers such that the denominator evenly divides the
numerator, then the result is immediately converted to the equivalent
integer.  This is called the rule of @i[rational canonicalization].

If the result of any computation would be a complex rational
with a zero imaginary part, the result is immediately
converted to a non-complex rational number by taking the
real part.  This is called the rule of @i[complex canonicalization].
Note that this rule does @i[not] apply to complex numbers whose components
are floating-point numbers.  Whereas @f[#C(5 0)] and @f[5] are not
distinct values in @clisp (they are always @f[eql]),
@f[#C(5.0 0.0)] and @f[5.0] are always distinct values in @clisp
(they are never @f[eql], although they are @f[equalp]).
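For example, the two canonicalization rules imply results such as these:
@lisp
(/ 12 4) @EV 3                        ;@r[rational canonicalization]
(+ #C(3 4) #C(2 -4)) @EV 5            ;@r[complex canonicalization]
(+ #C(3.0 4.0) #C(2.0 -4.0)) @EV #C(5.0 0.0)
@endlisp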

@Section[Predicates on Numbers]

Each of the following functions tests a single number for
a specific property.
Each function requires that its argument be
a number; to call one with a non-number is an error.

@Defun[Fun {zerop⎇, Args {@i[number]⎇]
This predicate is true if @i[number] is zero (either the integer zero,
a floating-point zero, or a complex zero), and is false otherwise.
Regardless of whether an implementation provides distinct representations
for positive and negative floating-point zeros,
@f[(zerop -0.0)] is always true.
It is an error if the argument @i[number] is not a number.
@Enddefun

@Defun[Fun {plusp⎇, Args {@i[number]⎇]
This predicate is true if @i[number] is strictly greater than zero,
and is false otherwise.
It is an error if the argument @i[number] is not a non-complex number.
@Enddefun

@Defun[Fun {minusp⎇, Args {@i[number]⎇]
This predicate is true if @i[number] is strictly less than zero,
and is false otherwise.
Regardless of whether an implementation provides distinct representations
for positive and negative floating-point zeros,
@f[(minusp -0.0)] is always false.
(The function @Funref[float-sign] may be used to distinguish a negative zero.)
It is an error if the argument @i[number] is not a non-complex number.
@Enddefun

@Defun[Fun {oddp⎇, Args {@i[integer]⎇]
This predicate is true if the argument @i[integer] is odd (not divisible
by two), and otherwise is false.  It is an error if the argument is not
an integer.
@Enddefun

@Defun[Fun {evenp⎇, Args {@i[integer]⎇]
This predicate is true if the argument @i[integer] is even (divisible
by two), and otherwise is false.  It is an error if the argument is not
an integer.
@Enddefun

See also the data-type predicates @Funref[integerp],
@Funref[rationalp], @Funref[floatp], @Funref[complexp], and @Funref[numberp].

@Section[Comparisons on Numbers]

Each of the functions in this section requires that its arguments all be
numbers; to call one with a non-number is an error.  Unless otherwise
specified, each works on all types of numbers, automatically performing
any required coercions when arguments are of different types.

@Defun[Fun {=⎇, Funlabel {#&M⎇, Args {@i[number] @rest @i[more-numbers]⎇]
@Defun1[Fun {/=⎇, Funlabel {#O#&M⎇, Args {@i[number] @rest @i[more-numbers]⎇]
@Defun1[Fun {<⎇, Funlabel {#&L⎇, Args {@i[number] @rest @i[more-numbers]⎇]
@Defun1[Fun {>⎇, Funlabel {#&N⎇, Args {@i[number] @rest @i[more-numbers]⎇]
@Defun1[Fun {<=⎇, Funlabel {#&L#&M⎇, Args {@i[number] @rest @i[more-numbers]⎇]
@Defun1[Fun {>=⎇, Funlabel {#&N#&M⎇, Args {@i[number] @rest @i[more-numbers]⎇]
These functions each take one or more arguments.  If the sequence
of arguments satisfies a certain condition:
@Begin[Format]
@Tabclear
@Tabset[+10,+10]
@\@f[=]@\all the same
@\@f[/=]@\all different
@\@f[<]@\monotonically increasing
@\@f[>]@\monotonically decreasing
@\@f[<=]@\monotonically nondecreasing
@\@f[>=]@\monotonically nonincreasing
@End[Format]
then the predicate is true, and otherwise is false.
Complex numbers may be compared using @f[=] and @f[/=],
but the others require non-complex arguments.
Two complex numbers are considered equal by @f[=]
if their real parts are equal and their imaginary parts are equal
according to @f[=].
A complex number may be compared to a non-complex number with @f[=] or @f[/=].
For example:
@lisp
@tabdivide[2]
(= 3 3) @r[is true.]@\(/= 3 3) @r[is false.]
(= 3 5) @r[is false.]@\(/= 3 5) @r[is true.]
(= 3 3 3 3) @r[is true.]@\(/= 3 3 3 3) @r[is false.]
(= 3 3 5 3) @r[is false.]@\(/= 3 3 5 3) @r[is false.]
(= 3 6 5 2) @r[is false.]@\(/= 3 6 5 2) @r[is true.]
(= 3 2 3) @r[is false.]@\(/= 3 2 3) @r[is false.]
(< 3 5) @r[is true.]@\(<= 3 5) @r[is true.]
(< 3 -5) @r[is false.]@\(<= 3 -5) @r[is false.]
(< 3 3) @r[is false.]@\(<= 3 3) @r[is true.]
(< 0 3 4 6 7) @r[is true.]@\(<= 0 3 4 6 7) @r[is true.]
(< 0 3 4 4 6) @r[is false.]@\(<= 0 3 4 4 6) @r[is true.]
(> 4 3) @r[is true.]@\(>= 4 3) @r[is true.]
(> 4 3 2 1 0) @r[is true.]@\(>= 4 3 2 1 0) @r[is true.]
(> 4 3 3 2 0) @r[is false.]@\(>= 4 3 3 2 0) @r[is true.]
(> 4 3 1 2 0) @r[is false.]@\(>= 4 3 1 2 0) @r[is false.]
(= 3) @r[is true.]@\(/= 3) @r[is true.]
(< 3) @r[is true.]@\(<= 3) @r[is true.]
(= 3.0 #C(3.0 0.0)) @r[is true.]@\(/= 3.0 #C(3.0 1.0)) @r[is true.]
(= 3 3.0) @r[is true.]@\(= 3.0s0 3.0d0) @r[is true.]
(= 0.0 -0.0) @r[is true.]@\(= 5/2 2.5) @r[is true.]
(> 0.0 -0.0) @r[is false.]@\(= 0 -0.0) @r[is true.]
@Endlisp
With two arguments, these functions perform the usual arithmetic
comparison tests.
With three or more arguments, they are useful for range checks.
For example:
@lisp
(<= 0 x 9)	       ;@r[true if @f[x] is between 0 and 9, inclusive]
(< 0.0 x 1.0)	       ;@r[true if @f[x] is between 0.0 and 1.0, exclusive]
(< -1 j (length s))    ;@r[true if @f[j] is a valid index for @f[s]]
(<= 0 j k (- (length s) 1))	;@r[true if @f[j] and @f[k] are each valid]
				;  @r[indices for @f[s] and also @f[j]@Sail[≤]@f[k]]
@Endlisp

@Rationale{The ``unequality'' relation is called @f[/=] rather than
@f[<>]
(the name used in @pascal) for two reasons.  First, @f[/=] of more than two
arguments is not the same as the @f[or] of @f[<] and @f[>] of those same
arguments.  Second, unequality is meaningful for complex numbers even though
@f[<] and @f[>] are not.  For both reasons it would be misleading to
associate unequality with the names of @f[<] and @f[>].⎇

@Incompatibility{In @clisp, the comparison operations
perform ``mixed-mode'' comparisons: @f[(= 3 3.0)] is true.  In @maclisp,
there must be exactly two arguments, and they must be either both fixnums
or both floating-point numbers.  To compare two numbers for numerical
equality and type equality, use @Funref[eql].⎇
@Enddefun

@Defun[Fun {max⎇, Args {@i[number] @rest @i[more-numbers]⎇]
@Defun1[Fun {min⎇, Args {@i[number] @rest @i[more-numbers]⎇]
The arguments may be any non-complex numbers.
@f[max] returns the argument that is greatest (closest
to positive infinity).
@f[min] returns the argument that is least (closest to
negative infinity).

For @f[max],
if the arguments are a mixture of rationals and floating-point
numbers, and the largest argument
is a rational, then the implementation is free to
produce either that rational or its floating-point approximation;
if the largest argument is a floating-point number of a smaller format
than the largest format of any floating-point argument,
then the implementation is free to
return the argument in its given format or expanded to the larger format.
More concisely, the implementation has the choice of returning the largest
argument as is or applying the rules of floating-point contagion,
taking all the arguments into consideration for contagion purposes.
Also, if two or more of the arguments are equal, then any one
of them may be chosen as the value to return.
Similar remarks apply to @f[min] (replacing ``largest argument'' by
``smallest argument'').

@lisp
@tabdivide[2]
(max 6 12) @EV 12@\(min 6 12) @EV 6
(max -6 -12) @EV -6@\(min -6 -12) @EV -12
(max 1 3 2 -7) @EV 3@\(min 1 3 2 -7) @EV -7
(max -2 3 0 7) @EV 7@\(min -2 3 0 7) @EV -2
(max 3) @EV 3@\(min 3) @EV 3
(max 5.0 2) @EV 5.0@\(min 5.0 2) @EV 2 @i[or] 2.0
(max 3.0 7 1) @EV 7 @i[or] 7.0@\(min 3.0 7 1) @EV 1 @i[or] 1.0
(max 1.0s0 7.0d0) @EV 7.0d0
(min 1.0s0 7.0d0) @EV 1.0s0 @i[or] 1.0d0
(max 3 1 1.0s0 1.0d0) @EV 3 @i[or] 3.0d0
(min 3 1 1.0s0 1.0d0) @EV 1 @i[or] 1.0s0 @i[or] 1.0d0
@Endlisp
@Enddefun

@Section[Arithmetic Operations]

Each of the functions in this section requires that its arguments all be
numbers; to call one with a non-number is an error.  Unless otherwise
specified, each works on all types of numbers, automatically performing
any required coercions when arguments are of different types.

@Defun[Fun {+⎇, Funlabel {#K⎇, Args {@rest @i[numbers]⎇]
This returns the sum of the arguments.  If there are no arguments, the result
is @f[0], which is an identity for this operation.

@Incompatibility{While @f[+] is compatible with its use in @lmlisp,
it is incompatible with @maclisp, which uses @f[+] for fixnum-only
addition.⎇
@Enddefun

@Defun[Fun {-⎇, Args {@i[number] @rest @i[more-numbers]⎇]
The function @f[-], when given one argument, returns the negative
of that argument.

The function @f[-], when given more than one argument, successively subtracts
from the first argument all the others, and returns the result.
For example, @f[(- 3 4 5)] @EV @f[-6].

@Incompatibility{While @f[-] is compatible with its use in @lmlisp,
it is incompatible with @maclisp, which uses @f[-] for fixnum-only
subtraction.
Also, @f[-] differs from @f[difference] as used in most @xlisp
systems in the case of one argument.⎇
@Enddefun

@Defun[Fun {*⎇, Args {@rest @i[numbers]⎇]
This returns the product of the arguments.
If there are no arguments, the result
is @f[1], which is an identity for this operation.

@Incompatibility{While @f[*] is compatible with its use in @lmlisp,
it is incompatible with @maclisp, which uses @f[*] for fixnum-only
multiplication.⎇
@Enddefun

@Defun[Fun {/⎇, Funlabel {#O⎇, Args {@i[number] @rest @i[more-numbers]⎇]
The function @f[/], when given more than one argument, successively divides
the first argument by all the others and returns the result.

With one argument, @f[/] reciprocates the argument.

@f[/] will produce a ratio if the mathematical quotient of two integers
is not an exact integer.  For example:
@Lisp
(/ 12 4) @EV 3
(/ 13 4) @EV 13/4
(/ -8) @EV -1/8
(/ 3 4 5) @EV 3/20
@Endlisp
To divide one integer by another producing an integer result,
use one of the functions @f[floor], @f[ceiling], @f[truncate],
or @Funref[round].

If any argument is a floating-point number,
then the rules of floating-point contagion apply.

@Incompatibility{What @f[/] does is totally unlike what the usual
@f[//] or @f[quotient] operator does.  In most @xlisp systems,
@f[quotient] behaves like @f[/] except when dividing integers,
in which case it behaves like @Funref[truncate] of two arguments;
this behavior is mathematically intractable, leading to such
anomalies as
@Lisp
(quotient 1.0 2.0) @EV 0.5   @r[but]   (quotient 1 2) @EV 0
@Endlisp
In contrast, the @clisp function @f[/] produces these results:
@Lisp
(/ 1.0 2.0) @EV 0.5          @r[and]   (/ 1 2) @EV 1/2
@Endlisp
In practice @f[quotient] is used only when one is sure that both arguments
are integers, @i[or] when one is sure that at least one argument
is a floating-point number.  @f[/] is tractable for its purpose
and ``works'' for @i[any] numbers.⎇
@Enddefun

@Defun[Fun {1+⎇, Funlabel {1#K⎇, Args {@i[number]⎇]
@Defun1[Fun {1-⎇, Args {@i[number]⎇]
@f[(1+ x)] is the same as @f[(+ x 1)].

@f[(1- x)] is the same as @f[(- x 1)].
Note that the short name may be confusing: @f[(1- x)] does @i[not] mean
1@Minussign@;@i[x]; rather, it means @i[x]@Minussign@;1.
@Rationale{These are included primarily for compatibility with @maclisp
and @lmlisp.  Some programmers prefer always to write @f[(+ x 1)] and
@f[(- x 1)] instead of @f[(1+ x)] and @f[(1- x)].⎇
@Implementation{Compiler writers are very strongly encouraged to ensure
that @f[(1+ x)] and @f[(+ x 1)] compile into identical code, and
similarly for @f[(1- x)] and @f[(- x 1)], to avoid pressure on a @xlisp
programmer to write possibly less clear code for the sake of efficiency.
This can easily be done as a source-language transformation.⎇
@Enddefun

@Defmac[Fun {incf⎇, Args {@i[place] @Mopt<@i[delta]>⎇]
@Defmac1[Fun {decf⎇, Args {@i[place] @Mopt<@i[delta]>⎇]
The number produced by the form @i[delta]
is added to (@f[incf]) or subtracted from (@f[decf])
the number in the generalized variable named by @i[place],
and the result is stored back into @i[place] and returned.
The form @i[place] may be any form acceptable
as a generalized variable to @Macref[setf].
If @i[delta] is not supplied, then the number in @i[place] is changed
by @f[1].
For example:
@lisp
(setq n 0)
(incf n) @EV 1      @r[and now] n @EV 1
(decf n 3) @EV -2   @r[and now] n @EV -2
(decf n -5) @EV 3   @r[and now] n @EV 3
(decf n) @EV 2      @r[and now] n @EV 2
@Endlisp
The effect of @f[(incf @i[place] @i[delta])]
is roughly equivalent to
@Lisp
(setf @i[place] (+ @i[place] @i[delta]))
@Endlisp
except that the latter would evaluate any subforms of @i[place]
twice, whereas @f[incf] takes care to evaluate them only once.
Moreover, for certain @i[place] forms @f[incf] may be
significantly more efficient than the @f[setf] version.
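As an illustration of the single evaluation of subforms (the vector @f[a]
and the index variable @f[i] here serve only as an example):
@lisp
(setq a (vector 0 0 0 0))
(setq i 0)
(incf (aref a (incf i))) @EV 1   @r[and now] i @EV 1 @r[and] a @EV #(0 1 0 0)
@endlisp
Had the @f[setf] form shown above been written out instead, the subform
@f[(incf i)] would have been evaluated twice, leaving @f[i] equal to @f[2].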
@Enddefmac

@Defun[Fun {conjugate⎇, Args {@i[number]⎇]
This returns the complex conjugate of @i[number].  The conjugate
of a non-complex number is itself.  For a complex number @f[z],
@Lisp
(conjugate z) @EQ (complex (realpart z) (- (imagpart z)))
@Endlisp
For example:
@lisp
(conjugate #C(3/5 4/5)) @EV #C(3/5 -4/5)
(conjugate #C(0.0D0 -1.0D0)) @EV #C(0.0D0 1.0D0)
(conjugate 3.7) @EV 3.7
@endlisp
@Enddefun

@Defun[Fun {gcd⎇, Args {@rest @i[integers]⎇]
This returns the greatest common divisor of all the arguments,
which must be integers.  The result of @f[gcd] is always a non-negative
integer.
If one argument is given, its absolute value is returned.
If no arguments are given, @f[gcd] returns @f[0],
which is an identity for this operation.
For three or more arguments,
@Lisp
(gcd @i[a] @i[b] @i[c] ... @i[z]) @EQ (gcd (gcd @i[a] @i[b]) @i[c] ... @i[z])
@Endlisp

Here are some examples of the use of @f[gcd]:
@lisp
(gcd 91 -49) @EV 7
(gcd 63 -42 35) @EV 7
(gcd 5) @EV 5
(gcd -4) @EV 4
(gcd) @EV 0
@Endlisp
@Enddefun

@Defun[Fun {lcm⎇, Args {@i[integer] @rest @i[more-integers]⎇]
This returns the least common multiple of its arguments,
which must be integers.
The result of @f[lcm] is always a non-negative integer.
For two arguments that are not both zero,
@lisp
(lcm @i[a] @i[b]) @EQ (/ (abs (* @i[a] @i[b])) (gcd @i[a] @i[b]))
@Endlisp
If one or both arguments are zero,
@lisp
(lcm @i[a] 0) @EQ (lcm 0 @i[a]) @EQ 0
@endlisp

For one argument, @f[lcm] returns the absolute value of that argument.
For three or more arguments,
@Lisp
(lcm @i[a] @i[b] @i[c] ... @i[z]) @EQ (lcm (lcm @i[a] @i[b]) @i[c] ... @i[z])
@Endlisp

Some examples:
@lisp
(lcm 14 35) @EV 70
(lcm 0 5) @EV 0
(lcm 1 2 3 4 5 6) @EV 60
@Endlisp

Mathematically, @f[(lcm)] should return infinity.  Because @clisp
does not have a representation for infinity, @f[lcm], unlike @f[gcd],
always requires at least one argument.
@Enddefun

@Section[Irrational and Transcendental Functions]
@label[TRANSCENDENTAL-SECTION]

@clisp provides no data type that can accurately represent irrational
numerical values.
The functions in this section are described as if the results
were mathematically accurate, but actually they all produce floating-point
approximations to the true mathematical result in the general case.
In some places
mathematical identities are set forth that are intended to elucidate the
meanings of the functions; however, two mathematically identical
expressions may be computationally different because of errors
inherent in the floating-point approximation process.

When the arguments to
a function in this section are all rational and the true mathematical result
is also (mathematically) rational, then unless otherwise noted
an implementation is free to return either an accurate result of
type @f[rational] or a single-precision floating-point approximation.
If the arguments are all rational but the result cannot be expressed
as a rational number, then a single-precision floating-point
approximation is always returned.
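For instance, one would expect the following (the choice in the first line
rests with the implementation, and the second result is approximate):
@lisp
(sin 0) @EV 0 @i[or] 0.0
(atan 1) @EV 0.7853982     ;@r[a single-format approximation to @Sail[π]/4]
@endlisp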

The rules of floating-point contagion and complex contagion are 
effectively obeyed by all the functions in this section except @f[expt],
which treats some cases of rational exponents specially.
When, possibly after contagious conversion, all of the arguments are of
the same floating-point or complex floating-point type,
then the result will be of that same type unless otherwise noted.

@Implementation{There is a ``floating-point cookbook'' by
Cody and Waite @Cite[CODY-AND-WAITE] that may be a useful aid
in implementing the functions defined in this section.⎇

@Subsection[Exponential and Logarithmic Functions]

Along with the usual one-argument and two-argument exponential and
logarithm functions, @f[sqrt] is considered to be an exponential
function, because it raises a number to the power 1/2.

@Defun[Fun {exp⎇, Args {@i[number]⎇]
Returns @i[e] raised to the power @i[number],
where @i[e] is the base of the natural logarithms.
@Enddefun

@Defun[Fun {expt⎇, Args {@i[base-number] @i[power-number]⎇]
Returns @i[base-number] raised to the power @i[power-number].
If the @i[base-number] is of type @f[rational] and the @i[power-number] is
an integer,
the calculation will be exact and the result will be of type @f[rational];
otherwise a floating-point approximation may result.

When @i[power-number] is @f[0] (a zero of type integer),
then the result is always the value one in the type of @i[base-number],
even if the @i[base-number] is zero (of any type).  That is:
@lisp
(expt @i[x] 0) @EQ (coerce 1 (type-of @i[x]))
@endlisp
If the @i[power-number] is a zero of any other data type,
then the result is also the value one, in the type of the arguments
after the application of the contagion rules, with one exception:
it is an error if @i[base-number] is zero when the @i[power-number]
is a zero not of type integer.
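These rules lead to results such as the following:
@lisp
(expt 5 0) @EV 1
(expt 5.0d0 0) @EV 1.0d0
(expt 0 0) @EV 1
(expt 5 0.0) @EV 1.0
(expt 0 0.0) @r[is an error.]
@endlisp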

Implementations of @f[expt] are permitted to use different algorithms
for the cases of a rational @i[power-number] and a floating-point
@i[power-number]; the motivation is that in many cases greater accuracy
can be achieved for the case of a rational @i[power-number].
For example, @f[(expt pi 16)] and @f[(expt pi 16.0)] may yield
slightly different results if the first case is computed by repeated squaring
and the second by the use of logarithms.  Similarly, an implementation
might choose to compute @f[(expt x 3/2)] as if it had
been written @f[(sqrt (expt x 3))], perhaps producing a more accurate
result than would @f[(expt x 1.5)].  It is left to the implementor
to determine the best strategies.

The result of @f[expt] can be a complex number, even when neither argument
is complex, if @i[base-number] is negative and @i[power-number]
is not an integer.  The result is always the principal complex value.
Note that @f[(expt -8 1/3)] is not permitted to return @f[-2];
while @f[-2] is indeed one of the cube roots of @f[-8], it is
not the principal cube root, which is a complex number
approximately equal to @f[#C(1.0 1.73205)].
@Enddefun

@Defun[Fun {log⎇, Args {@i[number] @optional @i[base]⎇]
Returns the logarithm of @i[number] in the base @i[base],
which defaults to @i[e], the base of the natural logarithms.
For example:
@lisp
(log 8.0 2) @EV 3.0
(log 100.0 10) @EV 2.0
@Endlisp
The result of @f[(log 8 2)] may be either @f[3] or @f[3.0], depending on the
implementation.

Note that @f[log] may return a complex result when given a non-complex
argument if the argument is negative.  For example:
@lisp
(log -1.0) @EQ (complex 0.0 (float pi 0.0))
@endlisp
@Enddefun

@Defun[Fun {sqrt⎇, Args {@i[number]⎇]
Returns the principal square root of @i[number].
If the @i[number] is not complex but is negative, then the result
will be a complex number.
For example:
@lisp
(sqrt 9.0) @EV 3.0
(sqrt -9.0) @EV #c(0.0 3.0)
@endlisp
The result of @f[(sqrt 9)] may be either @f[3] or @f[3.0], depending on the
implementation.  The result of @f[(sqrt -9)] may be either @f[#c(0 3)]
or @f[#c(0.0 3.0)].
@Enddefun

@Defun[Fun {isqrt⎇, Args {@i[integer]⎇]
Integer square root: the argument must be a non-negative integer, and the
result is the greatest integer less than or equal to the exact positive
square root of the argument.
For example:
@lisp
(isqrt 9) @EV 3
(isqrt 12) @EV 3
(isqrt 300) @EV 17
(isqrt 325) @EV 18
@endlisp
@Enddefun

@Subsection[Trigonometric and Related Functions]

Some of the functions in this section, such as @f[abs]
and @f[signum], are apparently unrelated
to trigonometric functions when considered as functions of
real numbers only.  The way in which they are extended to
operate on complex numbers makes the trigonometric connection clear.

@Defun[Fun {abs⎇, Args {@i[number]⎇]
Returns the absolute value of the argument.

For a non-complex number,
@Lisp
(abs x) @EQ (if (minusp x) (- x) x)
@Endlisp
and the result is always of the same type as the argument.

For a complex number @i[z], the absolute value may be computed as
@Lisp
(sqrt (+ (expt (realpart @i[z]) 2) (expt (imagpart @i[z]) 2)))
@Endlisp
@Implementation{The careful implementor will not use this formula directly
for all complex numbers
but will instead handle very large or very small components specially
to avoid intermediate overflow or underflow.⎇
For example:
@lisp
(abs #c(3.0 -4.0)) @EV 5.0
@endlisp
The result of @f[(abs #c(3 4))] may be either @f[5] or @f[5.0],
depending on the implementation.
@Enddefun

@Defun[Fun {phase⎇, Args {@i[number]⎇]
The phase of a number is the angle part of its polar representation
as a complex number.  That is,
@Lisp
(phase x) @EQ (atan (imagpart x) (realpart x))
@Endlisp
The result is in radians, in the range @Minussign@;@sail[π] (exclusive)
to @Sail[π] (inclusive).  The phase of a positive non-complex number
is zero; that of a negative non-complex number is @Sail[π].
The phase of zero is arbitrarily defined to be zero.

If the argument is a complex floating-point number, the result
is a floating-point number of the same type as the components of
the argument.
If the argument is a floating-point number, the result is a
floating-point number of the same type.
If the argument is a rational number or complex rational number, the result
is a single-format floating-point number.
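For example (the floating-point values shown are approximate):
@lisp
(phase 1) @EV 0.0
(phase #C(0 1)) @EV 1.5707964           ;@r[approximately @Sail[π]/2]
(phase -1.0d0) @EV 3.141592653589793d0  ;@r[approximately @Sail[π]]
@endlisp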
@Enddefun

@Defun[Fun {signum⎇, Args {@i[number]⎇]
By definition,
@Lisp
(signum @i[x]) @EQ (if (zerop @i[x]) @i[x] (/ @i[x] (abs @i[x])))
@Endlisp
For a rational number, @f[signum] will return one of @f[-1], @f[0], or @f[1]
according to whether the number is negative, zero, or positive.
For a floating-point number, the result will be a floating-point number
of the same format whose value is minus one, zero, or one.
For a complex number @i[z], @f[(signum @i[z])] is a complex number of
the same phase but with unit magnitude, unless @i[z] is a complex zero,
in which case the result is @i[z].
For example:
@lisp
(signum 0) @EV 0
(signum -3.7L5) @EV -1.0L0
(signum 4/5) @EV 1
(signum #C(7.5 10.0)) @EV #C(0.6 0.8)
(signum #C(0.0 -14.7)) @EV #C(0.0 -1.0)
@endlisp
For non-complex rational numbers, @f[signum] is a rational function,
but it may be irrational for complex arguments.
@Enddefun

@Defun[Fun {sin⎇, Args {@i[radians]⎇]
@Defun1[Fun {cos⎇, Args {@i[radians]⎇]
@Defun1[Fun {tan⎇, Args {@i[radians]⎇]
@f[sin] returns the sine of the argument, @f[cos] the cosine,
and @f[tan] the tangent.  The argument is in radians.
The argument may be complex.
@Enddefun

@Defun[Fun {cis⎇, Args {@i[radians]⎇]
This computes @i[e]@+[@superi[i]@supercenterdot@superi[radians]].
The name @f[cis] means ``cos + @i[i] sin,'' because
@i[e]@+[@superi[i]@superg[q]] = cos @g[q] + @i[i] sin @g[q].
The argument is in
radians and may be any non-complex number.  The result is a complex
number whose real part is the cosine of the argument and whose imaginary
part is the sine.  Put another way, the result is a complex number whose
phase is equal to the argument (mod 2@sail[π])
and whose magnitude is unity.
@Implementation{Often it is cheaper to calculate the sine and cosine
of a single angle together than to perform two disjoint calculations.⎇
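For example:
@lisp
(cis 0.0) @EV #C(1.0 0.0)
@endlisp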
@Enddefun

@Defun[Fun {asin⎇, Args {@i[number]⎇]
@Defun1[Fun {acos⎇, Args {@i[number]⎇]
@f[asin] returns the arc sine of the argument, and @f[acos] the arc cosine.
The result is in radians.  The argument may be complex.

The arc sine and arc cosine functions may be defined mathematically for
an argument @i[x] as follows:
@begin[format]
@Tabset[+2.5 in]
Arc sine@\@minussign@i[i] log (@i[i x]+@BigSqrt{1@minussign@;@i[x]@+[2]⎇)
Arc cosine@\@minussign@i[i] log (@i[x]+@i[i] @BigSqrt{1@minussign@;@i[x]@+[2]⎇)
@end[format]
Note that the result of either @f[asin] or @f[acos] may be
complex even if the argument is not complex; this occurs
when the absolute value of the argument is greater than one.

@Implementation{These formulae are mathematically correct, assuming
completely accurate computation.  They may be terrible methods for
floating-point computation!  Implementors should consult a good text on
numerical analysis.  The formulas given above are not necessarily
the simplest ones for real-valued computations, either; they are chosen
to define the branch cuts in desirable ways for the complex case.⎇
@Enddefun

@Defun[Fun {atan⎇, Args {@i[y] @optional @i[x]⎇]
An arc tangent is calculated and the result is returned in radians.

With two arguments @i[y] and @i[x], neither argument may be complex.
The result is the arc tangent of the quantity @i[y/x].
The signs of @i[y] and @i[x] are used to derive quadrant
information; moreover, @i[x] may be zero provided
@i[y] is not zero.  The value of @f[atan] is always between
@Minussign@;@Sail[π] (exclusive) and @Sail[π] (inclusive).
The following table details various special cases.
@Begin[Group]
@Begin[Format]
@Tabclear
@Tabset[+10,+7,+10,+20,+20]
@\@!@ux[Condition]@/@\@\@ux[Cartesian locus]@\@=@ux[Range of result]@\
@\@i[y] @Sail[=] 0@\@i[x] @Sail[>] 0@\Positive @i[x]-axis@\@=0@\
@\@i[y] @Sail[>] 0@\@i[x] @Sail[>] 0@\Quadrant I@\@=0 @Sail[<] result @Sail[<] @Sail[π]/2@\
@\@i[y] @Sail[>] 0@\@i[x] @Sail[=] 0@\Positive @i[y]-axis@\@=@Sail[π]/2@\
@\@i[y] @Sail[>] 0@\@i[x] @Sail[<] 0@\Quadrant II@\@=@Sail[π]/2 @Sail[<] result @Sail[<] @Sail[π]@\
@\@i[y] @Sail[=] 0@\@i[x] @Sail[<] 0@\Negative @i[x]-axis@\@=@Sail[π]@\
@\@i[y] @Sail[<] 0@\@i[x] @Sail[<] 0@\Quadrant III@\@=@Minussign@;@Sail[π] @Sail[<] result @Sail[<] @Minussign@;@Sail[π]/2@\
@\@i[y] @Sail[<] 0@\@i[x] @Sail[=] 0@\Negative @i[y]-axis@\@=@Minussign@;@Sail[π]/2@\
@\@i[y] @Sail[<] 0@\@i[x] @Sail[>] 0@\Quadrant IV@\@=@Minussign@;@Sail[π]/2 @Sail[<] result @Sail[<] 0@\
@\@i[y] @Sail[=] 0@\@i[x] @Sail[=] 0@\Origin@\@=error@\
@End[Format]
@End[Group]

With only one argument @i[y], the argument may be complex.
The result is the arc tangent of @i[y], which may be defined by
the following formula:
@begin[format]
@Tabset[+2.5 in]
Arc tangent@\@minussign@i[i] log ((1+@i[i] @i[y]) @BigSqrt{1/(1+@i[y]@+[2])⎇)
@end[format]
@Implementation{This formula is mathematically correct, assuming
completely accurate computation.  It may be a terrible method for
floating-point computation!  Implementors should consult a good text on
numerical analysis.  The formula given above is not necessarily
the simplest one for real-valued computations, either; it is chosen
to define the branch cuts in desirable ways for the complex case.⎇

For a non-complex argument @i[y], the result is non-complex and lies between
@Minussign@;@Sail[π]/2 and @Sail[π]/2 (both exclusive).

@Incompatibility{@maclisp has a function called @f[atan] whose
range is from 0 to 2@Sail[π].  Almost every other programming language
(ANSI @fortran, IBM @PL1, @InterLISP) has a two-argument arc tangent
function with range @Minussign@;@Sail[π] to @Sail[π].
@lmlisp provides two two-argument
arc tangent functions, @f[atan] (compatible with @maclisp)
and @f[atan2] (compatible with all others).

@clisp makes two-argument @f[atan] the standard one
with range @Minussign@;@Sail[π] to @Sail[π].  Observe that this makes
the one-argument and two-argument versions of @f[atan] compatible
in the sense that the branch cuts do not fall in different places.
The @interlisp one-argument function @f[arctan] has a range
from 0 to @Sail[π], while nearly every other programming language
provides the range @Minussign@;@Sail[π]/2 to @Sail[π]/2 for
one-argument arc tangent!
Nevertheless, since @interlisp uses the standard two-argument
version of arc tangent, its branch cuts are inconsistent anyway.⎇
@Enddefun

@Defcon[Var {pi⎇]
This global variable has as its value the best possible approximation to
@Sail[π] in @i[long] floating-point format.
For example:
@lisp
(defun sind (x)			;@r[The argument is in degrees.]
  (sin (* x (/ (float pi x) 180))))
@Endlisp
An approximation to @Sail[π] in some other precision can
be obtained by writing @f[(float pi @i[x])], where @i[x] is a
floating-point number of the desired precision,
or by writing @f[(coerce pi @i[type])], where @i[type] is the
name of the desired type, such as @f[short-float].
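For instance, in an implementation whose single floating-point format
carries roughly seven decimal digits, one might see
@f[(float pi 1.0)] @EV @f[3.1415927].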
@Enddefcon


@Defun[Fun {sinh⎇, Args {@i[number]⎇]
@Defun1[Fun {cosh⎇, Args {@i[number]⎇]
@Defun1[Fun {tanh⎇, Args {@i[number]⎇]
@Defun1[Fun {asinh⎇, Args {@i[number]⎇]
@Defun1[Fun {acosh⎇, Args {@i[number]⎇]
@Defun1[Fun {atanh⎇, Args {@i[number]⎇]
These functions compute the hyperbolic sine, cosine, tangent,
arc sine, arc cosine, and arc tangent functions, which are mathematically
defined for an argument @i[x] as follows:
@begin[format]
@Tabset[+2.5 in]
Hyperbolic sine@\(@i[e]@+[@superi[x]]@Minussign@;@i[e]@+[@superMinussign@;@superi[x]])/2
Hyperbolic cosine@\(@i[e]@+[@superi[x]]+@i[e]@+[@Superminussign@;@superi[x]])/2
Hyperbolic tangent@\(@i[e]@+[@superi[x]]@minussign@;@i[e]@+[@Superminussign@;@superi[x]])/(@i[e]@+[@superi[x]]+@i[e]@+[@Superminussign@;@superi[x]])
Hyperbolic arc sine@\log (@i[x]+@BigSqrt{1+@i[x]@+[2]⎇)
Hyperbolic arc cosine@\log (@i[x]+(@i[x]+1)@sqrt{(@i[x]@minussign@;1)/(@i[x]+1)⎇)
Hyperbolic arc tangent@\log ((1+@i[x])@BigSqrt{1@minussign@;1/@i[x]@+[2]⎇)
@end[format]
Note that the result of @f[acosh] may be
complex even if the argument is not complex; this occurs
when the argument is less than one.
Also, the result of @f[atanh] may be
complex even if the argument is not complex; this occurs
when the absolute value of the argument is greater than one.

@Implementation{These formulae are mathematically correct, assuming
completely accurate computation.  They may be terrible methods for
floating-point computation!  Implementors should consult a good text on
numerical analysis.  The formulas given above are not necessarily
the simplest ones for real-valued computations, either; they are chosen
to define the branch cuts in desirable ways for the complex case.⎇
@Enddefun

@Subsection[Branch Cuts, Principal Values, and Boundary Conditions in the Complex Plane]

Many of the irrational and transcendental functions are multiply defined
in the complex domain; for example, there are in general an infinite
number of complex values for the logarithm function.  In each such
case, a principal value must be chosen for the function to return.
In general, such values cannot be chosen so as to make the range
continuous; lines in the domain
called @i[branch cuts] must be defined, which in turn
define the discontinuities in the range.

@clisp defines the branch cuts, principal values, and boundary
conditions for the complex functions following
a proposal for complex functions in @apl @Cite[APL-BRANCH-CUTS].
The contents of this section are borrowed largely from that proposal.

@Incompatibility{The branch cuts defined here differ in a few very minor
respects from those advanced by W. Kahan, who considers not only the
``usual'' definitions but also the special modifications necessary for
@c[ieee] proposed floating-point arithmetic, which has infinities and
minus zero as explicit computational objects.  For example, he proposes
that @sqrt{@minussign@;4+0@i[i]⎇=2@i[i], but
@sqrt{@minussign@;4@minussign@;0@i[i]⎇=@minussign@;2@i[i].

It may be that the differences between the @apl proposal and Kahan's
proposal will be ironed out.  If so, @clisp may be
changed as necessary to be compatible with these other groups.  Any changes
from the specification below are likely to be quite minor,
probably concerning primarily questions of which side of a branch cut
is continuous with the cut itself.⎇

@Begin[Description]
@f[sqrt]@\The branch cut for square root lies along the negative real axis,
continuous with quadrant II.
The range consists of the right half-plane, including the non-negative
imaginary axis and excluding the negative imaginary axis.

@f[phase]@\The branch cut for the phase function lies along the negative real
axis, continuous with quadrant II.  The range consists of that portion of
the real axis between @minussign@Sail[π] (exclusive) and @Sail[π]
(inclusive).

@Begin[Multiple]
@f[log]@\The branch cut for the logarithm function of one argument (natural
logarithm) lies along the negative real axis, continuous with quadrant II.
The domain excludes the origin.  For a complex number @i[z],
log @i[z] is defined to be (log |@i[z]|)+@i[i] @i[phase](@i[z]).
Therefore the range of the one-argument logarithm function is that strip
of the complex plane containing numbers with imaginary parts between
@minussign@Sail[π] (exclusive) and @Sail[π] (inclusive).

The two-argument logarithm function is defined as log@-[@subi[b]] @i[z]=(log @i[z])/(log @i[b]).
This defines the principal values precisely.  The range of the two-argument
logarithm function is the entire complex plane.
It is an error if @i[z] is zero.  If @i[z] is non-zero and @i[b] is zero,
the logarithm is taken to be zero.
@End[Multiple]

@f[exp]@\The simple exponential function has no branch cut.

@Begin[Multiple]
@f[expt]@\The two-argument exponential function is defined
as @i[b]@+[@superi[x]]=@i[e]@+[@superi[x] log @superi[b]].
This defines the principal values precisely.  The range of the
two-argument exponential function is the entire complex plane.  Regarded
as a function of @i[x], with @i[b] fixed, there is no branch cut.
Regarded as a function of @i[b], with @i[x] fixed, there is in general
a branch cut along the negative real axis, continuous with quadrant II.
The domain excludes the origin.
By definition, 0@+[0]=1.  If @i[b]=0 and the real part of @i[x] is strictly
positive, then @i[b]@+[@superi[x]]=0.  For all other values of @i[x], 0@+[@superi[x]]
is an error.
@End[Multiple]

@Begin[Multiple]
@f[asin]@\The following definition for arc sine determines the range and
branch cuts:
@Begin[Format]
arcsin @i[z]=@minussign@i[i] log (@i[i z]+@BigSqrt{1@minussign@;@i[z]@+[2]⎇)
@End[Format]
The branch cut for the arc sine function is in two pieces:
one along the negative real axis to the left of @minussign@;1
(inclusive), continuous with quadrant II, and one along the positive real
axis to the right of 1 (inclusive), continuous with quadrant IV.  The
range is that strip of the complex plane containing numbers whose real
part is between @minussign@Sail[π]/2 and @Sail[π]/2.  A number with real
part equal to @minussign@Sail[π]/2 is in the range if and only if its imaginary
part is non-negative; a number with real part equal to @Sail[π]/2 is in
the range if and only if its imaginary part is non-positive.
@End[Multiple]

@Begin[Multiple]
@f[acos]@\The following definition for arc cosine determines the range and
branch cuts:
@Begin[Format]
arccos @i[z]=@minussign@i[i] log (@i[z]+@i[i] @BigSqrt{1@minussign@;@i[z]@+[2]⎇)
@End[Format]
or, which is equivalent,
@Begin[Format]
arccos @i[z]=(@Sail[π]/2)@minussign@;arcsin @i[z]
@End[Format]
The branch cut for the arc cosine function is in two pieces:
one along the negative real axis to the left of @minussign@;1
(inclusive), continuous with quadrant II, and one along the positive real
axis to the right of 1 (inclusive), continuous with quadrant IV.  
This is the same branch cut as for arc sine.
The range is that strip of the complex plane containing numbers whose real
part is between 0 and @Sail[π].  A number with real
part equal to 0 is in the range if and only if its imaginary
part is non-negative; a number with real part equal to @Sail[π] is in
the range if and only if its imaginary part is non-positive.
@End[Multiple]

@Begin[Multiple]
@f[atan]@\The following definition for (one-argument) arc tangent determines the
range and branch cuts:
@Begin[Format]
arctan @i[z]=@minussign@i[i] log ((1+@i[i] @i[z]) @BigSqrt{1/(1+@i[z]@+[2])⎇)
@End[Format]
Beware of simplifying this formula; ``obvious'' simplifications are likely
to alter the branch cuts or the values on the branch cuts incorrectly.
The branch cut for the arc tangent function is in two pieces:
one along the positive imaginary axis above @i[i]
(exclusive), continuous with quadrant II, and one along the negative imaginary
axis below @minussign@;@i[i] (exclusive), continuous with quadrant IV.  
The points @i[i] and @minussign@;@i[i] are excluded from the domain.
The range is that strip of the complex plane containing numbers whose real
part is between @minussign@Sail[π]/2 and @Sail[π]/2.  A number with real
part equal to @minussign@Sail[π]/2 is in the range if and only if its imaginary
part is strictly positive; a number with real part equal to @Sail[π]/2 is in
the range if and only if its imaginary part is strictly negative.  Thus the range of
arc tangent is identical to that of arc sine with the points
@minussign@Sail[π]/2 and @Sail[π]/2 excluded.
@End[Multiple]

@Begin[Multiple]
@f[asinh]@\The following definition for the inverse hyperbolic sine determines
the range and branch cuts:
@Begin[Format]
arcsinh @i[z]=log (@i[z]+@BigSqrt{1+@i[z]@+[2]⎇)
@End[Format]
The branch cut for the inverse hyperbolic sine function is in two pieces:
one along the positive imaginary axis above @i[i]
(inclusive), continuous with quadrant I, and one along the negative imaginary
axis below @minussign@;@i[i] (inclusive), continuous with quadrant III.
The range is that strip of the complex plane containing numbers whose imaginary
part is between @minussign@Sail[π]/2 and @Sail[π]/2.  A number with imaginary
part equal to @minussign@Sail[π]/2 is in the range if and only if its real
part is non-positive; a number with imaginary part equal to @Sail[π]/2 is in
the range if and only if its real part is non-negative.
@End[Multiple]

@Begin[Multiple]
@f[acosh]@\The following definition for the inverse hyperbolic cosine
determines the range and branch cuts:
@Begin[Format]
arccosh @i[z]=log (@i[z]+(@i[z]+1)@sqrt{(@i[z]@minussign@;1)/(@i[z]+1)⎇)
@End[Format]
The branch cut for the inverse hyperbolic cosine function
lies along the real axis to the left of 1 (inclusive), extending
indefinitely along the negative real axis, continuous with quadrant II
and (between 0 and 1) with quadrant I.
The range is that half-strip of the complex plane containing numbers whose
real part is non-negative and whose imaginary
part is between @minussign@Sail[π] (exclusive) and @Sail[π] (inclusive).
A number with real part zero is in the range 
if its imaginary part is between zero (inclusive) and @Sail[π] (inclusive).
@End[Multiple]

@Begin[Multiple]
@f[atanh]@\The following definition for the inverse hyperbolic tangent
determines the range and branch cuts:
@Begin[Format]
arctanh @i[z]=log ((1+@i[z])@BigSqrt{1@minussign@;1/@i[z]@+[2]⎇)
@End[Format]
Beware of simplifying this formula; ``obvious'' simplifications are
likely to alter the branch cuts or the values on the branch cuts
incorrectly.  The branch cut for the inverse hyperbolic tangent function
is in two pieces: one along the negative real axis to the left of
@minussign@;1 (inclusive), continuous with quadrant III, and one along
the positive real axis to the right of 1 (inclusive), continuous with
quadrant I.  The points @minussign@;1 and 1 are excluded from the
domain.
The range is that strip of the complex plane containing
numbers whose imaginary part is between @minussign@Sail[π]/2 and
@Sail[π]/2.  A number with imaginary part equal to @minussign@Sail[π]/2
is in the range if and only if its real part is strictly negative; a number with
imaginary part equal to @Sail[π]/2 is in the range if and only if its real
part is strictly positive.  Thus the range of the inverse
hyperbolic tangent function is identical to
that of the inverse hyperbolic sine function with the points
@minussign@Sail[π]@i[i]/2 and @Sail[π]@i[i]/2 excluded.
@End[Multiple]
@End[Description]

With these definitions, the following useful identities are obeyed
throughout the applicable portion of the complex domain, even on
the branch cuts:
@Begin[Format,Spread 0.5]
@Tabdivide[3]
@=sin @i[i] @i[z] = @i[i] sinh @i[z]@\@=sinh @i[i] @i[z] = @i[i] sin @i[z]@\@=arctan @i[i] @i[z] = @i[i] arctanh @i[z]@\
@=cos @i[i] @i[z] = cosh @i[z]@\@=cosh @i[i] @i[z] = cos @i[z]@\@=arcsinh @i[i] @i[z] = @i[i] arcsin @i[z]@\
@=tan @i[i] @i[z] = @i[i] tanh @i[z]@\@=arcsin @i[i] @i[z] = @i[i] arcsinh @i[z]@\@=arctanh @i[i] @i[z] = @i[i] arctan @i[z]@\
@End[Format]

@Section[Type Conversions and Component Extractions on Numbers]

While most arithmetic functions will operate on any kind of number,
coercing types if necessary, the following functions are provided to
allow specific conversions of data types to be forced when desired.

@Defun[Fun {float⎇, Args {@i[number] @optional @i[other]⎇]
This converts any non-complex number to a floating-point number.
With no second argument, if @i[number] is already a floating-point
number, then @i[number] is returned;
otherwise a @f[single-float] is produced.
If the argument @i[other] is provided, then it must be a floating-point
number, and @i[number] is converted to the same format as @i[other].
See also @Funref[coerce].
@Enddefun

@Defun[Fun {rational⎇, Args {@i[number]⎇]
@Defun1[Fun {rationalize⎇, Args {@i[number]⎇]
Each of these functions converts any non-complex number to be a rational
number.  If the argument is already rational, it is returned.
The two functions differ in their treatment of floating-point numbers.

@f[rational] assumes that the floating-point number is completely accurate
and returns a rational number mathematically equal to the precise
value of the floating-point number.

@f[rationalize] assumes that the
floating-point number is accurate only to the precision of the
floating-point representation, and may return any rational number for
which the floating-point number is the best available approximation of
its format; in doing this it attempts to keep both numerator and
denominator small.

It is always the case that
@Lisp
(float (rational x) x) @EQ x
@endlisp
and
@lisp
(float (rationalize x) x) @EQ x
@Endlisp
That is, rationalizing a floating-point number by either method
and then converting it back
to a floating-point number of the same format produces the original number.
What distinguishes the two functions is that @f[rational] typically
has a simple, inexpensive implementation, whereas @f[rationalize] goes
to more trouble to produce a result that is more pleasant to view and
simpler for some purposes to compute with.
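For example, in an implementation whose single floating-point format has
a 24-bit binary significand, one might see:
@lisp
(rational 1/3) @EV 1/3
(rational 0.5) @EV 1/2
(rational 0.1) @EV 13421773/134217728
(rationalize 0.1) @EV 1/10
@endlisp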
@Enddefun

@Defun[Fun {numerator⎇, Args {@i[rational]⎇]
@Defun1[Fun {denominator⎇, Args {@i[rational]⎇]
These functions take a rational number (an integer or ratio)
and return as an integer the numerator or denominator of the canonical
reduced form of the rational.  The numerator of an integer is that integer,
and the denominator of an integer is @f[1].  Note that
@Lisp
(gcd (numerator @i[x]) (denominator @i[x])) @EV 1
@Endlisp
The denominator will always be a strictly positive integer;
the numerator may be any integer.
For example:
@lisp
(numerator (/ 8 -6)) @EV -4
(denominator (/ 8 -6)) @EV 3
@Endlisp
@Enddefun

There is no @f[fix] function in @clisp because there are several
interesting ways to convert non-integral values to integers.
These are provided by the functions below, which perform not only
type-conversion but also some non-trivial calculations.

@Defun[Fun {floor⎇, Args {@i[number] @optional @i[divisor]⎇]
@Defun1[Fun {ceiling⎇, Args {@i[number] @optional @i[divisor]⎇]
@Defun1[Fun {truncate⎇, Args {@i[number] @optional @i[divisor]⎇]
@Defun1[Fun {round⎇, Args {@i[number] @optional @i[divisor]⎇]
In the simple one-argument case,
each of these functions converts its argument @i[number]
(which must not be complex) to be an integer.
If the argument is already an integer, it is returned directly.
If the argument is a ratio or floating-point number, the functions use
different algorithms for the conversion.

@f[floor] converts its argument by truncating toward negative
infinity; that is, the result is the largest integer that is not larger
than the argument.

@f[ceiling] converts its argument by truncating toward positive
infinity; that is, the result is the smallest integer that is not smaller
than the argument.

@f[truncate] converts its argument by truncating toward zero;
that is, the result is the integer of the same sign as the argument
and which has the greatest integral
magnitude not greater than that of the argument.

@f[round] converts its argument by rounding to the nearest
integer; if @i[number] is exactly halfway between two integers
(that is, of the form @i[integer]+0.5), then it is rounded to the one that
is even (divisible by two).

The following table shows what the four functions produce when given
various arguments.
@Begin[Group]
@Begin[Verbatim]
@Tabclear
@Tabdivide[5]
@u[@r[Argument]@\floor@\ceiling@\truncate@\round]
 2.6@\  2@\  3@\  2@\  3
 2.5@\  2@\  3@\  2@\  2
 2.4@\  2@\  3@\  2@\  2
 0.7@\  0@\  1@\  0@\  1
 0.3@\  0@\  1@\  0@\  0
-0.3@\ -1@\  0@\  0@\  0
-0.7@\ -1@\  0@\  0@\ -1
-2.4@\ -3@\ -2@\ -2@\ -2
-2.5@\ -3@\ -2@\ -2@\ -2
-2.6@\ -3@\ -2@\ -2@\ -3
@End[Verbatim]
@End[Group]
If a second argument @i[divisor] is supplied, then the result
is the appropriate type of rounding or truncation applied to the
result of dividing the @i[number] by the @i[divisor].
For example, @f[(floor 5 2)] = @f[(floor (/ 5 2))] but is potentially more
efficient.  The @i[divisor] may be any non-complex number.
The one-argument case is exactly like the two-argument case where the second
argument is @f[1].

@Index2[P {Multiple values⎇, S {returned by @f[floor]⎇]
@Index2[P {Multiple values⎇, S {returned by @f[ceiling]⎇]
@Index2[P {Multiple values⎇, S {returned by @f[truncate]⎇]
@Index2[P {Multiple values⎇, S {returned by @f[round]⎇]
Each of the functions actually returns @i[two] values,
whether given one or two arguments.  The second
result is the remainder and may be obtained using
@Macref[multiple-value-bind] and related constructs.
If any of these functions is given two arguments @i[x] and @i[y]
and produces results @i[q] and @i[r], then @i[q]@centerdot@i[y]+@i[r]=@i[x].
The first result @i[q] is always an integer.
The remainder @i[r] is an integer if both arguments are integers,
is rational if both arguments are rational,
and is floating-point if either argument is floating-point.
One consequence is that
in the one-argument case the remainder is always a number of the same type
as the argument.

When only one argument is given, the two results are exact;
the mathematical sum of the two results is always equal to the
mathematical value of the argument.
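For example:
@lisp
(floor 5 2) @EV 2 @r[and] 1
(ceiling 5 2) @EV 3 @r[and] -1
(truncate -5 2) @EV -2 @r[and] -1
(round 5 2) @EV 2 @r[and] 1
(floor 2.5) @EV 2 @r[and] 0.5
(truncate -2.25) @EV -2 @r[and] -0.25
@endlisp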

@Incompatibility{The names of the functions @f[floor], @f[ceiling],
@f[truncate], and @f[round] are more accurate than names like @f[fix]
that have heretofore been used in various @xlisp systems.
The names used here are compatible with standard mathematical
terminology (and with @pl1, as it happens).  In @fortran
@f[ifix] means @f[truncate].  @algol 68 provides @f[round]
and uses @f[entier] to mean @f[floor].
In @maclisp, @f[fix] and @f[ifix] both
mean @f[floor] (one is generic, the other flonum-in/fixnum-out).
In @interlisp, @f[fix] means @f[truncate].
In @lmlisp, @f[fix] means @f[floor] and @f[fixr] means @f[round].
@stdlisp provides a @f[fix] function but does not
specify precisely what it does.  The existing usage
of the name @f[fix] is so confused that it seemed best to avoid it
altogether.

The names and definitions given here have recently been adopted
by @lmlisp, and @maclisp and @newlisp seem likely to follow suit.⎇
@Enddefun

@Defun[Fun {mod⎇, Args {@i[number] @i[divisor]⎇]
@Defun1[Fun {rem⎇, Args {@i[number] @i[divisor]⎇]
@f[mod] performs the operation @Funref[floor] on its two arguments
and returns the @i[second] result of @f[floor] as its only result.
Similarly,
@f[rem] performs the operation @Funref[truncate] on its arguments
and returns the @i[second] result of @f[truncate] as its only result.

@f[mod] and @f[rem] are therefore the usual modulus
and remainder functions when applied to two integer arguments.
In general, however, the arguments may be integers or floating-point
numbers.
@Lisp
@Tabclear
@Tabdivide[2]
(mod 13 4) @EV 1@\(rem 13 4) @EV 1
(mod -13 4) @EV 3@\(rem -13 4) @EV -1
(mod 13 -4) @EV -3@\(rem 13 -4) @EV 1
(mod -13 -4) @EV -1@\(rem -13 -4) @EV -1
(mod 13.4 1) @EV 0.4@\(rem 13.4 1) @EV 0.4
(mod -13.4 1) @EV 0.6@\(rem -13.4 1) @EV -0.4
@Endlisp
@Incompatibility{The @interlisp function @f[remainder] is essentially
equivalent to the @clisp function @f[rem].  The @maclisp function @f[remainder]
is like @f[rem] but accepts only integer arguments.⎇
@Enddefun

@Defun[Fun {ffloor⎇, Args {@i[number] @optional @i[divisor]⎇]
@Defun1[Fun {fceiling⎇, Args {@i[number] @optional @i[divisor]⎇]
@Defun1[Fun {ftruncate⎇, Args {@i[number] @optional @i[divisor]⎇]
@Defun1[Fun {fround⎇, Args {@i[number] @optional @i[divisor]⎇]
These functions are just like @f[floor], @f[ceiling], @f[truncate], and
@f[round], except that the result (the first result of two) is always a
floating-point number rather than an integer.  It is roughly as if
@f[ffloor] gave its arguments to @f[floor], and then applied @f[float] to
the first result before passing them both back.  In practice, however,
@f[ffloor] may be implemented much more efficiently.  Similar remarks
apply to the other three functions.  If the first argument is a
floating-point number, and the second argument is not a floating-point
number of longer format, then the first result will be a floating-point
number of the same type as the first argument.
For example:
@lisp
(ffloor -4.7) @EV -5.0 and 0.3
(ffloor 3.5d0) @EV 3.0d0 and 0.5d0
@Endlisp
@Index2[P {Multiple values⎇, S {returned by @f[ffloor]⎇]
@Index2[P {Multiple values⎇, S {returned by @f[fceiling]⎇]
@Index2[P {Multiple values⎇, S {returned by @f[ftruncate]⎇]
@Index2[P {Multiple values⎇, S {returned by @f[fround]⎇]
@Enddefun

@Defun[Fun {decode-float⎇, Args {@i[float]⎇]
@Defun1[Fun {scale-float⎇, Args {@i[float] @i[integer]⎇]
@Defun1[Fun {float-radix⎇, Args {@i[float]⎇]
@Defun1[Fun {float-sign⎇, Args {@i[float1] @optional @i[float2]⎇]
@Defun1[Fun {float-digits⎇, Args {@i[float]⎇]
@Defun1[Fun {float-precision⎇, Args {@i[float]⎇]
@Defun1[Fun {integer-decode-float⎇, Args {@i[float]⎇]
The function @f[decode-float] takes a floating-point number
and returns three values.

@Index2[P {Multiple values⎇, S {returned by @f[decode-float]⎇]
The first value is a new floating-point number of the same format
representing the significand; the second value is an integer
representing the exponent; and the third value is a floating-point
number of the same format indicating the sign.
Let @i[b] be the radix for the floating-point representation;
then @f[decode-float] divides the argument by an integral power of @i[b]
so as to bring its value between 1/@i[b] (inclusive) and 1 (exclusive),
and returns the quotient as the first value.
If the argument is zero, however, the result
equals the absolute value of the argument (that is, if there is a negative
zero, its significand is considered to be a positive zero).

The second value of @f[decode-float] is
the integer exponent @i[e] to which @i[b] must be raised
to produce the appropriate power for the division.
If the argument is zero, any integer value may be returned, provided
that the identity shown below for @f[scale-float] holds.

The third value of @f[decode-float] is a floating-point number,
of the same format as the argument, whose absolute value is one
and whose sign matches that of the argument.

The function @f[scale-float] takes a floating-point number @i[f]
(not necessarily between 1/@i[b] and 1) and
an integer @i[k], and returns @f[(* @i[f] (expt (float @i[b] @i[f]) @i[k]))].
(The use of @f[scale-float] may be much more efficient than using
exponentiation and multiplication, and avoids intermediate
overflow and underflow if the final result is representable.)

Note that
@lisp
(multiple-value-bind (signif expon sign)
		     (decode-float @i[f])
  (scale-float signif expon))
@EQ (abs @i[f])
@endlisp
and
@lisp
(multiple-value-bind (signif expon sign)
		     (decode-float @i[f])
  (* (scale-float signif expon) sign))
@EQ @i[f]
@endlisp
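
For instance, in an implementation whose floating-point radix @i[b] is 2,
one would expect results such as these:
@lisp
(decode-float 0.75) @EV 0.75 and 0 and 1.0
(decode-float 8.0) @EV 0.5 and 4 and 1.0
(decode-float -1.25) @EV 0.625 and 1 and -1.0
(scale-float 0.75 4) @EV 12.0
@endlisp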

The function @f[float-radix] returns (as an integer)
the radix @i[b] of the floating-point argument.

The function @f[float-sign] returns a floating-point number @i[z] such
that @i[z] and @i[float1] have the same sign and also such that
@i[z] and @i[float2] have the same absolute value.
The argument @i[float2] defaults to the value of @f[(float 1 @i[float1])];
@f[(float-sign x)] therefore always produces a @f[1.0] or @f[-1.0]
of appropriate format
according to the sign of @f[x].  (Note that if an implementation
has distinct representations for negative zero and positive zero,
then @f[(float-sign -0.0)] @ev @f[-1.0].)

The function @f[float-digits] returns, as a non-negative integer,
the number of radix-@i[b] digits
used in the representation of its argument (including any implicit
digits, such as a ``hidden bit'').
The function @f[float-precision]
returns, as a non-negative integer,
the number of significant radix-@i[b] digits present in the
argument; if the argument is (a floating-point)
zero, then the result is (an integer) zero.
For normalized floating-point numbers, the results of @f[float-digits]
and @f[float-precision]
will be the same, but the precision will be less than the
number of representation digits for a denormalized or zero number.

The function @f[integer-decode-float] is similar to @f[decode-float]
but for its first value returns,
as an @f[integer], the significand scaled so as to be an integer.
For an argument @f[f], this integer will be strictly less than
@lisp
@f[(expt @i[b] (float-precision @i[f]))]
@endlisp
but no less than
@lisp
@f[(expt @i[b] (- (float-precision @i[f]) 1))]
@endlisp
except that if @i[f] is zero, then the integer value will be zero.

@Index2[P {Multiple values⎇, S {returned by @f[integer-decode-float]⎇]
The second value bears the same relationship to the first value
as for @f[decode-float]:
@lisp
(multiple-value-bind (signif expon sign)
		     (integer-decode-float @i[f])
  (scale-float (float signif @i[f]) expon))
@EQ (abs @i[f])
@endlisp

@Rationale{These functions allow the writing of machine-independent,
or at least machine-parameterized, floating-point software of reasonable
efficiency.⎇
@Enddefun

@Defun[Fun {complex⎇, Args {@i[realpart] @optional @i[imagpart]⎇]
The arguments must be non-complex numbers; a number is returned
that has @i[realpart] as its real part and @i[imagpart] as its imaginary
part, possibly converted according to the rule of floating-point
contagion (thus both components will be of the same type).
If @i[imagpart] is not specified,
then @f[(coerce 0 (type-of @i[realpart]))] is
effectively used.  Note that if both the @i[realpart] and @i[imagpart] are
rational and the @i[imagpart] is zero, then the result is just the
@i[realpart] because of the rule of canonical representation
for complex rationals.  It follows that the result of @f[complex]
is not always a complex number; it may be simply a @f[rational].
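For example, one would expect:
@lisp
(complex 4 3) @EV #C(4 3)
(complex 4 0) @EV 4
(complex 4 0.0) @EV #C(4.0 0.0)
(complex 4.0) @EV #C(4.0 0.0)
@Endlisp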
@Enddefun

@Defun[Fun {realpart⎇, Args {@i[number]⎇]
@Defun1[Fun {imagpart⎇, Args {@i[number]⎇]
These return the real and imaginary parts of a complex number.  If
@i[number] is a non-complex number, then @f[realpart] returns its
argument @i[number] and @f[imagpart]
returns @f[(* 0 @i[number])], which
has the effect that the imaginary part of a rational is @f[0] and that of
a floating-point number is a floating-point zero of the same format.
@Enddefun


@Section[Logical Operations on Numbers]

The logical operations in this section require integers
as arguments; it is an error to supply a non-integer as an argument.
The functions all treat integers as if
they were represented in two's-complement notation.

@Implementation{Internally, of course, an implementation of
@clisp may or may not use a two's-complement representation.
All that is necessary is that the logical operations
perform calculations so as to give this appearance to the user.⎇

The logical operations provide a convenient way to represent
an infinite vector of bits.  Let such a conceptual vector be
indexed by the non-negative integers.  Then bit @i[j] is assigned
a ``weight'' 2@+[@superi[j]].
Assume that only a finite number of bits are ones
or only a finite number of bits are zeros.
A vector with only a finite number of one-bits is represented
as the sum of the weights of the one-bits, a non-negative integer.
A vector with only a finite number of zero-bits is represented
as @f[-1] minus the sum of the weights of the zero-bits, a negative integer.

@Index2[P {sets⎇, S {bit-vector representation⎇]
@Index2[P {sets⎇, S {integer representation⎇]
@Index2[P {sets⎇, S {infinite⎇]
This method of using integers to represent bit-vectors can in turn
be used to represent sets.  Suppose that some (possibly countably
infinite) universe of discourse
for sets is mapped into the non-negative integers.
Then a set can be represented as a bit vector; an element is in the
set if the bit whose index corresponds to that element is a one-bit.
In this way all finite sets can be represented (by non-negative
integers), as well as all sets whose complements are finite
(by negative integers).  The functions @f[logior], @f[logand],
and @f[logxor] defined below then compute the union,
intersection, and symmetric difference operations on sets
represented in this way.
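For instance, here is a small illustrative sketch of this technique
(the function names are not part of @clisp; they use @f[logior], @f[ash],
and @f[logbitp], described in this section):
@lisp
(defun set-adjoin (element set) (logior set (ash 1 element)))
(defun set-member-p (element set) (logbitp element set))

(set-adjoin 3 (set-adjoin 1 0)) @EV 10   ;@r[the set of 1 and 3]
(set-member-p 3 10) @r[is true]
(logand 10 8) @EV 8                      ;@r[intersect with the set of 3]
@Endlisp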

@Defun[Fun {logior⎇, Args {@rest @i[integers]⎇]
This returns the bit-wise logical @i[inclusive or] of its arguments.
If no argument is given, then the result is zero,
which is an identity for this operation.
@Enddefun

@Defun[Fun {logxor⎇, Args {@rest @i[integers]⎇]
This returns the bit-wise logical @i[exclusive or] of its arguments.
If no argument is given, then the result is zero,
which is an identity for this operation.
@Enddefun

@Defun[Fun {logand⎇, Args {@rest @i[integers]⎇]
This returns the bit-wise logical @i[and] of its arguments.
If no argument is given, then the result is @f[-1],
which is an identity for this operation.
@Enddefun

@Defun[Fun {logeqv⎇, Args {@rest @i[integers]⎇]
This returns the bit-wise logical @i[equivalence] (also known as @i[exclusive nor])
of its arguments.
If no argument is given, then the result is @f[-1],
which is an identity for this operation.
@Enddefun

@Defun[Fun {lognand⎇, Args {@i[integer1] @i[integer2]⎇]
@Defun1[Fun {lognor⎇, Args {@i[integer1] @i[integer2]⎇]
@Defun1[Fun {logandc1⎇, Args {@i[integer1] @i[integer2]⎇]
@Defun1[Fun {logandc2⎇, Args {@i[integer1] @i[integer2]⎇]
@Defun1[Fun {logorc1⎇, Args {@i[integer1] @i[integer2]⎇]
@Defun1[Fun {logorc2⎇, Args {@i[integer1] @i[integer2]⎇]
These are the other six non-trivial bit-wise logical operations
on two arguments.  Because they are not associative,
they take exactly two arguments rather than any non-negative number
of arguments.
@Lisp
@Tabset[+2.5in]
@>(lognand @i[n1] @i[n2]) @EQ @\(lognot (logand @i[n1] @i[n2]))
@>(lognor @i[n1] @i[n2]) @EQ @\(lognot (logior @i[n1] @i[n2]))
@>(logandc1 @i[n1] @i[n2]) @EQ @\(logand (lognot @i[n1]) @i[n2])
@>(logandc2 @i[n1] @i[n2]) @EQ @\(logand @i[n1] (lognot @i[n2]))
@>(logorc1 @i[n1] @i[n2]) @EQ @\(logior (lognot @i[n1]) @i[n2])
@>(logorc2 @i[n1] @i[n2]) @EQ @\(logior @i[n1] (lognot @i[n2]))
@Endlisp
@Enddefun

The ten bit-wise logical operations on two integers are summarized
in this table:
@Lisp
@Tabset[+.5in, +1.1 in, +.4 in, +.4 in, +.4 in, +.4 in]
@Mline[]
@\@>@i[Argument 1]@f[  ]@\0@\0@\1@\1
@\@ux[@>@i[Argument 2]@f[  ]@\0@\1@\0@\1@\@i[Operation name]]
@\logand@\0@\0@\0@\1@\@r[and]
@\logior@\0@\1@\1@\1@\@r[inclusive or]
@\logxor@\0@\1@\1@\0@\@r[exclusive or]
@\logeqv@\1@\0@\0@\1@\@r[equivalence (exclusive nor)]
@\lognand@\1@\1@\1@\0@\@r[not-and]
@\lognor@\1@\0@\0@\0@\@r[not-or]
@\logandc1@\0@\1@\0@\0@\@r[and complement of arg1 with arg2]
@\logandc2@\0@\0@\1@\0@\@r[and arg1 with complement of arg2]
@\logorc1@\1@\1@\0@\1@\@r[or complement of arg1 with arg2]
@\logorc2@\1@\0@\1@\1@\@r[or arg1 with complement of arg2]
@Mline[]
@Endlisp


@Defun[Fun {boole⎇, Args {@i[op] @i[integer1] @i[integer2]⎇]
@Defcon1[Var {boole-clr⎇]
@Defcon1[Var {boole-set⎇]
@Defcon1[Var {boole-1⎇]
@Defcon1[Var {boole-2⎇]
@Defcon1[Var {boole-c1⎇]
@Defcon1[Var {boole-c2⎇]
@Defcon1[Var {boole-and⎇]
@Defcon1[Var {boole-ior⎇]
@Defcon1[Var {boole-xor⎇]
@Defcon1[Var {boole-eqv⎇]
@Defcon1[Var {boole-nand⎇]
@Defcon1[Var {boole-nor⎇]
@Defcon1[Var {boole-andc1⎇]
@Defcon1[Var {boole-andc2⎇]
@Defcon1[Var {boole-orc1⎇]
@Defcon1[Var {boole-orc2⎇]
The function @f[boole] takes an operation @i[op] and two integers,
and returns an integer produced by performing the logical operation
specified by @i[op] on the two integers.  The precise values of
the sixteen constants are implementation-dependent, but they are
suitable for use as the first argument to @f[boole]:
@Lisp
@Tabset[+1.2 in, +.3 in, +.3 in, +.3 in, +.3 in]
@Mline[]
@>@i[integer1]@f[  ]@\0@\0@\1@\1
@ux[@>@i[integer2]@f[  ]@\0@\1@\0@\1@\@i[Operation performed]]
boole-clr@\0@\0@\0@\0@\@r[always 0]
boole-set@\1@\1@\1@\1@\@r[always 1]
boole-1@\0@\0@\1@\1@\@i[integer1]
boole-2@\0@\1@\0@\1@\@i[integer2]
boole-c1@\1@\1@\0@\0@\@r[complement of @i[integer1]]
boole-c2@\1@\0@\1@\0@\@r[complement of @i[integer2]]
boole-and@\0@\0@\0@\1@\@r[and]
boole-ior@\0@\1@\1@\1@\@r[inclusive or]
boole-xor@\0@\1@\1@\0@\@r[exclusive or]
boole-eqv@\1@\0@\0@\1@\@r[equivalence (exclusive nor)]
boole-nand@\1@\1@\1@\0@\@r[not-and]
boole-nor@\1@\0@\0@\0@\@r[not-or]
boole-andc1@\0@\1@\0@\0@\@r[and complement of @i[integer1] with @i[integer2]]
boole-andc2@\0@\0@\1@\0@\@r[and @i[integer1] with complement of @i[integer2]]
boole-orc1@\1@\1@\0@\1@\@r[or complement of @i[integer1] with @i[integer2]]
boole-orc2@\1@\0@\1@\1@\@r[or @i[integer1] with complement of @i[integer2]]
@Mline[]
@Endlisp
@f[boole] can therefore compute all sixteen logical functions on two
arguments.  In general,
@Lisp
(boole boole-and x y) @EQ (logand x y)
@Endlisp
and the latter is more perspicuous.  However, @f[boole] is useful when it
is necessary to parameterize a procedure so that it can use
one of several logical operations.
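For instance, one might write an illustrative procedure (not part of
@clisp) that applies any one of the sixteen operations elementwise to
two lists of integers:
@lisp
(defun combine-integers (op list1 list2)
  (mapcar #'(lambda (x y) (boole op x y)) list1 list2))

(combine-integers boole-and '(12 10) '(10 3)) @EV (8 2)
(combine-integers boole-xor '(12 10) '(10 3)) @EV (6 9)
@Endlisp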
@Enddefun

@Defun[Fun {lognot⎇, Args {@i[integer]⎇]
This returns the bit-wise logical @i[not] of its argument.
Every bit of the result is the complement of the corresponding bit
in the argument.
@Lisp
(logbitp @i[j] (lognot @i[x])) @EQ (not (logbitp @i[j] @i[x]))
@Endlisp
@Enddefun

@Defun[Fun {logtest⎇, Args {@i[integer1] @i[integer2]⎇]
@f[logtest] is a predicate that is true if any of
the bits designated by the 1's in @i[integer1] are 1's in @i[integer2].
@Lisp
(logtest @i[x] @i[y]) @EQ (not (zerop (logand @i[x] @i[y])))
@Endlisp
@Enddefun

@Defun[Fun {logbitp⎇, Args {@i[index] @i[integer]⎇]
@f[logbitp] is true if the bit in @i[integer] whose index
is @i[index] (that is, its weight is 2@+[@superi[index]]) is a one-bit;
otherwise it is false.
For example:
@lisp
(logbitp 2 6) @r[is true]
(logbitp 0 6) @r[is false]
(logbitp @i[k] @i[n]) @EQ (ldb-test (byte 1 @i[k]) @i[n])
@Endlisp
@Enddefun

@Defun[Fun {ash⎇, Args {@i[integer] @i[count]⎇]
This function shifts @i[integer] arithmetically left by @i[count] bit
positions if @i[count] is positive,
or right @f[-@i[count]] bit positions if @i[count] is negative.
The sign of the result is always the same as the sign of @i[integer].

Mathematically speaking, this operation performs the computation
@i[floor](@i[integer]@centerdot@;2@+[@superi[count]]).
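For instance:
@lisp
(ash 16 1) @EV 32
(ash 16 -1) @EV 8
(ash -7 -1) @EV -4   ;@r[the floor of -3.5]
@Endlisp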

Logically, this moves all of the bits in @i[integer] to the left,
adding zero-bits at the bottom, or moves them to the right,
discarding bits.  (In this context the question of what gets shifted
in on the left is irrelevant; integers, viewed as strings of bits,
are ``half-infinite,'' that is, conceptually extend infinitely far to the left.)
For example:
@lisp
(logbitp @i[j] (ash @i[n] @i[k]))
   @EQ (and (>= @i[j] @i[k]) (logbitp (- @i[j] @i[k]) @i[n]))
@Endlisp
@Enddefun

@Defun[Fun {logcount⎇, Args {@i[integer]⎇]
The number of bits in @i[integer] is determined and returned.
If @i[integer] is positive, then the @f[1] bits in its binary
representation are counted.  If @i[integer] is negative, then
the @f[0] bits in its two's-complement binary representation are counted.
The result is always a non-negative integer.
For example:
@lisp
@Tabset[28]
(logcount 13) @EV 3@\;@r[Binary representation is] ...0001101
(logcount -13) @EV 2@\;@r[Binary representation is] ...1110011
(logcount 30) @EV 4@\;@r[Binary representation is] ...0011110
(logcount -30) @EV 4@\;@r[Binary representation is] ...1100010
@Endlisp
The following identity always holds:
@Lisp
(logcount x) @EQ (logcount (- (+ x 1)))
             @EQ (logcount (lognot x))
@Endlisp
@Enddefun

@Defun[Fun {integer-length⎇, Args {@i[integer]⎇]
This function performs the computation
@Begin[Format]
@tabclear
@=@i[ceiling](log@-[2](@b[if] @i[integer] < 0 @b[then] @minussign@i[integer] @b[else] @i[integer]+1))
@End[Format]
This is useful in two different ways.
First, if @i[integer] is non-negative, then its value can be represented
in unsigned binary form in a field whose width in bits is
no smaller than @f[(integer-length @i[integer])].
Second, regardless of the sign of @i[integer], its value can be
represented in signed binary two's-complement form in a field
whose width in bits is no smaller than @f[(+ (integer-length @i[integer]) 1)].
For example:
@lisp
(integer-length 0) @EV 0
(integer-length 1) @EV 1
(integer-length 3) @EV 2
(integer-length 4) @EV 3
(integer-length 7) @EV 3
(integer-length -1) @EV 0
(integer-length -4) @EV 2
(integer-length -7) @EV 3
(integer-length -8) @EV 3
@Endlisp
@Incompatibility{This function is similar to the @maclisp
function @f[haulong].  One may define @f[haulong] as
@lisp
(haulong x) @EQ (integer-length (abs x))
@endlisp
⎇
@Enddefun


@Section[Byte Manipulation Functions]

Several functions are provided for dealing with an arbitrary-width field of
contiguous bits appearing anywhere in an integer.
Such a contiguous set of bits is called a @Def[byte].
Here the term @i[byte] does not imply some fixed number of bits
(such as eight); rather, it refers to a field of arbitrary, user-specifiable width.

The byte-manipulation functions use objects called @Def[byte specifiers] to
designate a specific byte position within an integer.
The representation of a byte specifier is implementation-dependent;
in particular, it may or may not be a number.
It is sufficient to know that the function @f[byte] will construct one,
and that the byte-manipulation functions will accept them.
The function @f[byte] accepts two integers representing
the @i[size] and @i[position] of the byte and returns
a byte specifier.
@Index2[P {size⎇, S {of a byte⎇]
@Index2[P {position⎇, S {of a byte⎇]
Such a specifier designates a byte whose width is @i[size]
and whose bits have weights 2@+[@superi[position]+@superi[size]@superminussign@;1] 
through 2@+[@superi[position]].

@Defun[Fun {byte⎇, Args {@i[size] @i[position]⎇]
@f[byte] takes two integers representing the size and position
of a byte and returns a byte specifier suitable for use
as an argument to byte-manipulation functions.
@Enddefun

@Defun[Fun {byte-size⎇, Args {@i[bytespec]⎇]
@Defun1[Fun {byte-position⎇, Args {@i[bytespec]⎇]
Given a byte specifier, @f[byte-size] returns the size specified as an
integer; @f[byte-position] similarly returns the position.
For example:
@lisp
(byte-size (byte @i[j] @i[k])) @EQ @i[j]
(byte-position (byte @i[j] @i[k])) @EQ @i[k]
@Endlisp
@Enddefun

@Defun[Fun {ldb⎇, Args {@i[bytespec] @i[integer]⎇]
@i[bytespec] specifies a byte of @i[integer] to be extracted.
The result is returned as a positive integer.
For example:
@lisp
(logbitp @i[j] (ldb (byte @i[s] @i[p]) @i[n]))
   @EQ (and (< @i[j] @i[s]) (logbitp (+ @i[j] @i[p]) @i[n]))
@Endlisp
The name of the function @f[ldb] means ``load byte.''
@Incompatibility{The @maclisp function @f[haipart] can be
implemented in terms of @f[ldb] as follows:
@Lisp
(defun haipart (integer count)
  (let ((x (abs integer)))
    (if (minusp count)
	(ldb (byte (- count) 0) x)
	(ldb (byte count (max 0 (- (integer-length x) count)))
	     x))))
@Endlisp⎇

If the argument @i[integer] is specified by a form that is a @i[place] form
acceptable to @Macref[setf], then
@f[setf] may be used with @f[ldb] to modify
a byte within the integer that is stored
in that @i[place].
The effect is to perform a @Funref[dpb] operation
and then store the result back into the @i[place].
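For example, assuming that @f[x] is a variable (and therefore an
acceptable @i[place]):
@lisp
(setq x 17)                   ;@r[binary] 10001
(setf (ldb (byte 3 0) x) 7)
x @EV 23                      ;@r[binary] 10111
@Endlisp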
@Enddefun

@Defun[Fun {ldb-test⎇, Args {@i[bytespec] @i[integer]⎇]
@f[ldb-test] is a predicate that is true if any of
the bits designated by the byte specifier @i[bytespec] are 1's in @i[integer];
that is, it is true if the designated field is non-zero.
@Lisp
(ldb-test @i[bytespec] @i[n]) @EQ (not (zerop (ldb @i[bytespec] @i[n])))
@Endlisp
@Enddefun

@Defun[Fun {mask-field⎇, Args {@i[bytespec] @i[integer]⎇]
This is similar to @f[ldb]; however, the result contains
the specified byte
of @i[integer] in the position specified by @i[bytespec],
rather than in position 0 as with @f[ldb].
The result therefore agrees with @i[integer] in the byte specified
but has zero-bits everywhere else.
For example:
@lisp
(ldb @i[bs] (mask-field @i[bs] @i[n])) @EQ (ldb @i[bs] @i[n])

(logbitp @i[j] (mask-field (byte @i[s] @i[p]) @i[n]))
   @EQ (and (>= @i[j] @i[p]) (< @i[j] (+ @i[p] @i[s])) (logbitp @i[j] @i[n]))

(mask-field @i[bs] @i[n]) @EQ (logand @i[n] (dpb -1 @i[bs] 0))
@Endlisp

If the argument @i[integer] is specified by a form that is a @i[place] form
acceptable to @Macref[setf],
then @f[setf] may be used with @f[mask-field]
to modify a byte within the integer that is stored
in that @i[place].
The effect is to perform a @Funref[deposit-field] operation
and then store the result back into the @i[place].
@Enddefun

@Defun[Fun {dpb⎇, Args {@i[newbyte] @i[bytespec] @i[integer]⎇]
This returns a number that is the same as @i[integer] except in the
bits specified by @i[bytespec].  Let @i[s] be the size specified
by @i[bytespec]; then the low @i[s] bits of @i[newbyte] appear in
the result in the byte specified by @i[bytespec].
The integer @i[newbyte] is therefore interpreted as
being right-justified, as if it were the result of @f[ldb].
For example:
@lisp
(logbitp @i[j] (dpb @i[m] (byte @i[s] @i[p]) @i[n]))
  @eq (if @↑(and (>= @i[j] @i[p]) (< @i[j] (+ @i[p] @i[s])))
@\(logbitp (- @i[j] @i[p]) @i[m])
@\(logbitp @i[j] @i[n]))
@Endlisp
The name of the function @f[dpb] means ``deposit byte.''
@Enddefun

@Defun[Fun {deposit-field⎇, Args {@i[newbyte] @i[bytespec] @i[integer]⎇]
This function is to @f[mask-field] as @f[dpb] is to @f[ldb].
The result is an integer that contains the bits of @i[newbyte]
within the byte specified by @i[bytespec], and elsewhere contains the bits
of @i[integer].
For example:
@lisp
(logbitp @i[j] (deposit-field @i[m] (byte @i[s] @i[p]) @i[n]))
   @EQ (if @↑(and (>= @i[j] @i[p]) (< @i[j] (+ @i[p] @i[s])))
@\(logbitp @i[j] @i[m])
@\(logbitp @i[j] @i[n]))
@Endlisp
@Implementation{If the @i[bytespec] is a constant, one may of course
construct, at compile time, an equivalent mask @i[m], for example
by computing @f[(deposit-field -1 @i[bytespec] 0)].  Given
this mask @i[m], one may then compute
@lisp
(deposit-field @i[newbyte] @i[bytespec] @i[integer])
@Endlisp
by computing
@Lisp
(logior (logand @i[newbyte] @i[m]) (logand @i[integer] (lognot @i[m])))
@Endlisp
where the result of @f[(lognot @i[m])] can of course also be computed
at compile time.  However, the following expression
may also be used and may require fewer
temporary registers in some situations:
@Lisp
(logxor @i[integer] (logand @i[m] (logxor @i[integer] @i[newbyte])))
@Endlisp
A related, though possibly less useful, trick is that
@Lisp
(let ((z (logand (logxor x y) m)))
  (setq x (logxor z x))
  (setq y (logxor z y)))
@Endlisp
interchanges those bits of @f[x] and @f[y] for which the mask @f[m] is
@f[1], and leaves alone those bits of @f[x] and @f[y] for which @f[m] is
@f[0].⎇
@Enddefun

@Section[Random Numbers]
@label[RANDOM]

The @clisp facility for generating pseudo-random numbers has
been carefully defined to make its use reasonably portable.
While two implementations may produce different series
of pseudo-random numbers, the distribution of values should
be relatively independent of such machine-dependent aspects
as word size.

@Defun[Fun {random⎇, Args {@i[number] @optional @i[state]⎇]
@f[(random @i[n])] accepts a positive number @i[n] and returns
a number of the same kind between zero (inclusive) and @i[n] (exclusive).
The number @i[n] may be an integer or a floating-point number.
An approximately uniform choice distribution is used.
If @i[n] is an integer, each of the possible results
occurs with (approximate) probability 1/@i[n].
(The qualifier ``approximate'' is used because of implementation
considerations; in practice, the deviation from uniformity should be
quite small.)

The argument @i[state] must be an object of type @f[random-state];
it defaults to the value of the variable @Var[random-state].
This object is used to maintain the state of the pseudo-random-number
generator and is altered as a side effect of the @f[random] operation.

@Incompatibility{@f[random] of zero arguments as defined in @maclisp
has been omitted because
its value is too implementation-dependent (limited by fixnum range).⎇

@Implementation{In general, even if @f[random] of zero arguments
were defined as in @maclisp,
it is not adequate to define @f[(random @i[n])] for integral @i[n]
to be simply @f[(mod (random) @i[n])]; this fails to be uniformly distributed
if @i[n] is larger than the largest number produced by @f[random],
or even if @i[n] merely approaches this number.
This is another reason for omitting @f[random] of zero arguments in @clisp.
Assuming that the underlying mechanism produces ``random bits''
(possibly in chunks such as fixnums), the best approach is to produce
enough random bits to construct an integer @i[k] some number @i[d] of bits
larger than @f[(integer-length @i[n])] (see @Funref[integer-length]), and
then compute @f[(mod @i[k] @i[n])].  The quantity @i[d] should be at
least 7, and preferably 10 or more.
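In code, this approach might look roughly like the following sketch
(the names @f[random-below] and @f[random-chunk-of-bits] are purely
illustrative and not part of @clisp; the latter is assumed to return a
non-negative integer composed of the requested number of random bits):
@lisp
(defun random-below (n)
  (let* ((d 10)     ;@r[extra bits beyond (integer-length n)]
         (k (random-chunk-of-bits (+ (integer-length n) d))))
    (mod k n)))
@Endlisp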

To produce random floating-point numbers in the half-open
range @lbracket@;@i[A], @i[B]),
accepted practice (as determined by a look through the
@i[Collected Algorithms from the ACM], particularly algorithms
133, 266, 294, and 370) is to compute @i[X]@centerdot@;(@i[B]@Minussign@;@i[A])+@i[A],
where @i[X] is a floating-point number uniformly distributed over
@lbracket@;0.0, 1.0)
and computed by calculating a random integer @i[N] in the range
@lbracket@;0, @i[M])
(typically by a multiplicative-congruential or linear-congruential method
mod @i[M]) and then setting @i[X]=@i[N]/@i[M].  See also @Cite[KNUTH-VOLUME-2].
If one takes @i[M] = 2@+[@superi[f]], where @i[f] is the length of the significand
of a floating-point number (and it is in fact common to choose @i[M]
to be a power of two), then this method is equivalent to the following
assembly-language-level procedure.  Assume the representation
has no hidden bit.  Take a floating-point 0.5,
and clobber its entire significand with random bits.  Normalize the
result if necessary.

For example, on the PDP-10, assume that accumulator @f[T] is completely random
(all 36 bits are random).  Then the code sequence
@Lisp
LSH T,-9	;@r[Clear high 9 bits; low 27 are random.]
FSC T,128.	;@r[Install exponent and normalize.]
@Endlisp
will produce in @f[T] a random floating-point number uniformly distributed
over @lbracket@;0.0, 1.0).  (Instead of the @f[LSH] instruction,
one could do
@lisp
TLZ T,777000	;@r[That's 777000 octal.]
@endlisp
but if the 36 random bits came from a congruential random-number generator,
the high-order bits tend to be ``more random'' than the low-order ones,
and so the @f[LSH] would be better for uniform distribution.
Ideally all the bits would be the result of high-quality randomness.)

With a hidden-bit representation, normalization is not a problem,
but dealing with the hidden bit is.  The method can be adapted as follows.
Take a floating-point 1.0 and clobber the explicit significand bits with
random bits; this produces a random floating-point number in
the range @lbracket@;1.0, 2.0).  Then simply subtract 1.0.  In effect, we
let the hidden bit creep in and then subtract it away again.

For example, on the @c[VAX], assume that register @f[T] is
completely random (but a little less random than on the PDP-10, as
it has only 32 random bits).  Then the code sequence
@Lisp
INSV #↑X81,#7,#9,T	;@r[Install correct sign bit and exponent.]
SUBF #↑F1.0,T		;@r[Subtract 1.0.]
@Endlisp
will produce in @f[T] a random floating-point number uniformly distributed
over @lbracket@;0.0, 1.0).  Again, if the low-order bits are not random enough,
then the instruction
@lisp
ROTL #7,T
@endlisp
should be performed first.

Implementors may wish to consult reference @cite[ADDITIVE-RANDOMS] for
a discussion of some efficient methods of generating pseudo-random numbers.⎇
@Enddefun

@Defvar[Var {random-state⎇]
This variable holds a data structure,
an object of type @f[random-state], that encodes the internal state
of the random-number generator that @f[random] uses by default.
The nature
of this data structure is implementation-dependent.  It may be
printed out and successfully read back in, but may or may not function
correctly as a random-number state object in another implementation.
A call to @f[random] will perform a side effect on this data structure.
Lambda-binding this variable to a different random-number state object
will correctly save and restore the old state object, of course.
@Enddefvar

@Defun[Fun {make-random-state⎇, Args {@optional @i[state]⎇]
This function returns a new object of type @f[random-state],
suitable for use as the value of the variable @Var[random-state].
If @i[state] is @false or omitted, @f[make-random-state] returns a @i[copy]
of the current random-number state object (the value of
the variable @Var[random-state]).  If @i[state] is a state object,
a copy of that state object is returned.  If @i[state] is @true,
then a new state object is returned that has been ``randomly''
initialized by some means (such as by a time-of-day clock).
@Rationale{@clisp purposely provides no way to initialize a @f[random-state]
object from a user-specified ``seed.''  The reason for this is that
the number of bits of state information in a @f[random-state] object
may vary widely from one implementation to another, and there is no
simple way to guarantee that any user-specified seed value will be
``random enough.''  Instead, the initialization of @f[random-state]
objects is left to the implementor in the case where the argument @true
is given to @f[make-random-state].

To handle the common situation of executing the same program many times
in a reproducible manner, where that program uses @f[random], the following
procedure may be used:
@Begin[Enumerate]
Evaluate @f[(make-random-state t)] to create a @f[random-state] object.

Write that object to a file, using @Funref[print], for later use.

Whenever the program is to be run, first use @Funref[read] to create
a copy of the @f[random-state] object from the printed representation
in the file.
Then use the @f[random-state] object newly created by the @f[read] operation
to initialize the random-number generator for the program.
@End[Enumerate]
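In code, the procedure just enumerated might look roughly like this
(the file name is purely illustrative):
@lisp
;@r[Steps 1 and 2: create a randomly initialized state and save it.]
(with-open-file (out "seed.lisp" :direction :output)
  (print (make-random-state t) out))

;@r[Step 3: each later run restores the saved state before calling random.]
(with-open-file (in "seed.lisp")
  (setq *random-state* (read in)))
@Endlisp
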
It is for the sake of this procedure for reproducible execution that
implementations are required to provide a read/print syntax for objects
of type @f[random-state].

It is also possible to make copies of a @f[random-state] object
directly without going through the print/read process, simply by
using the @f[make-random-state] function to copy the object; this allows
the same sequence of random numbers to be generated many times
within a single program.⎇

@Implementation{A recommended way to implement the type @f[random-state]
is effectively to use the machinery for @macref[defstruct].
The usual structure syntax may then be used for printing @f[random-state]
objects; one might look something like
@lisp
#S(RANDOM-STATE DATA #(14 49 98436589 786345 8734658324 ...))
@endlisp
where the components are of course completely implementation-dependent.⎇
@Enddefun

@Defun[Fun {random-state-p⎇, Args {@i[object]⎇]
@Index2[P {random-state⎇, S {predicate⎇]
@f[random-state-p] is true if its argument is a random-state object,
and otherwise is false.
@Lisp
(random-state-p x) @EQ (typep x 'random-state)
@Endlisp
@Enddefun

@Section[Implementation Parameters]

The values of the named constants defined in this section are
implementation-dependent.  They may be useful for parameterizing
code in some situations.

@Defcon[Var {most-positive-fixnum⎇]
@Defcon1[Var {most-negative-fixnum⎇]
The value of @f[most-positive-fixnum] is that fixnum closest in value to
positive infinity provided by the implementation.

The value of @f[most-negative-fixnum] is that fixnum closest in value to
negative infinity provided by the implementation.
@Enddefcon

@Defcon[Var {most-positive-short-float⎇]
@Defcon1[Var {least-positive-short-float⎇]
@Defcon1[Var {least-negative-short-float⎇]
@Defcon1[Var {most-negative-short-float⎇]
The value of @f[most-positive-short-float] is that short-format
floating-point number closest in value to (but not equal to)
positive infinity provided by the implementation.

The value of @f[least-positive-short-float] is that positive short-format
floating-point number closest in value to (but not equal to) zero provided by
the implementation.

The value of @f[least-negative-short-float] is that negative short-format
floating-point number closest in value to (but not equal to) zero provided by
the implementation.  (Note that even if an implementation supports
minus zero as a distinct short floating-point value,
@f[least-negative-short-float] must not be minus zero.)

The value of @f[most-negative-short-float] is that short-format
floating-point number closest in value to (but not equal to)
negative infinity provided by the implementation.
@Enddefcon


@Defcon[Var {most-positive-single-float⎇]
@Defcon1[Var {least-positive-single-float⎇]
@Defcon1[Var {least-negative-single-float⎇]
@Defcon1[Var {most-negative-single-float⎇]
@Defcon1[Var {most-positive-double-float⎇]
@Defcon1[Var {least-positive-double-float⎇]
@Defcon1[Var {least-negative-double-float⎇]
@Defcon1[Var {most-negative-double-float⎇]
@Defcon1[Var {most-positive-long-float⎇]
@Defcon1[Var {least-positive-long-float⎇]
@Defcon1[Var {least-negative-long-float⎇]
@Defcon1[Var {most-negative-long-float⎇]
These are analogous to the constants defined above for short-format
floating-point numbers.
@Enddefcon

@Defcon[Var {short-float-epsilon⎇]
@Defcon1[Var {single-float-epsilon⎇]
@Defcon1[Var {double-float-epsilon⎇]
@Defcon1[Var {long-float-epsilon⎇]
These constants have as value, for each floating-point format,
the smallest positive floating-point number @i[e] of that format such that
the expression
@Lisp
(not (= (float 1 @i[e]) (+ (float 1 @i[e]) @i[e])))
@Endlisp
is true when actually evaluated.
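
As an illustration (this predicate is not part of @clisp, and the
tolerance policy shown is merely one common choice), such a constant
might be used to parameterize an approximate-equality test:
@lisp
(defun nearly-equal-p (x y &optional (epsilon single-float-epsilon))
  (<= (abs (- x y))
      (* epsilon (max (abs x) (abs y)))))
@Endlisp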
@Enddefcon

@Defcon[Var {short-float-negative-epsilon⎇]
@Defcon1[Var {single-float-negative-epsilon⎇]
@Defcon1[Var {double-float-negative-epsilon⎇]
@Defcon1[Var {long-float-negative-epsilon⎇]
These constants have as value, for each floating-point format,
the smallest positive floating-point number @i[e] of that format such that
the expression
@Lisp
(not (= (float 1 @i[e]) (- (float 1 @i[e]) @i[e])))
@Endlisp
is true when actually evaluated.
@Enddefcon